From Theory to Practice: AI Governance Insights from Hong Kong

 This reflection captures key insights from the AI for Business conference in Hong Kong, hosted by HKU Business School in collaboration with global research institutions, and situates them within my ongoing work on AI governance and the ADOR framework.

Conference Reflection: AI for Business – Hong Kong

At the beginning of this year, I had the opportunity to travel to Hong Kong to attend the AI for Business conference. The conference brought together students, researchers, and practitioners presenting thesis work and applied research on how artificial intelligence is reshaping business, governance, and society. Across sessions, the discussions moved beyond technical performance to examine AI’s broader economic, cultural, and ethical implications—highlighting both its transformative potential and its systemic risks.

The AI for Business conference was hosted and supported by a strong consortium of academic and research institutions committed to advancing responsible, interdisciplinary AI scholarship. The event was led by HKU Business School (University of Hong Kong) in collaboration with the Institute of Digital Economy and Innovation (IDEI) and the AI Evaluation Lab (AIEL), reflecting Hong Kong’s growing role as a global hub for AI research and business innovation.

The conference also benefited from an international academic partnership with Oxford Saïd Business School and the Oxford Human–AI Interaction Lab (HAI Lab). Their involvement reinforced the conference’s emphasis on human-centered AI, governance, and ethical evaluation, bridging perspectives from Asia, Europe, and global industry practice. Together, these institutions created a rigorous platform for dialogue at the intersection of artificial intelligence, business strategy, public policy, and societal impact.

1. AI Agents, Algorithmic Personalization, and Market Concentration

One of the key themes explored was how AI agents drive algorithmic personalization in digital advertising markets. In mature markets such as the United States, programmatic advertising—largely powered by AI-driven automation—accounts for approximately 88–91% of digital display ad spending, underscoring the scale at which algorithmic decision-making already operates (Amra & Elma, 2025; Insivia, 2024). While competition among platforms appears to improve efficiency, conference discussions emphasized how similar optimization goals and shared data structures lead AI systems to converge, producing algorithmic monoculture.

This convergence can reduce informational diversity and reinforce winner-take-most dynamics, as dominant firms benefit from data network effects that continually improve model performance and raise barriers to entry. From a social welfare perspective, the concern is not the absence of competition, but rather competitive convergence that limits meaningful consumer choice while amplifying concentration risks.
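
To make the monoculture concern concrete, here is a small, self-contained Python simulation I put together while reflecting on the session. It is purely my own illustration, not data or a model presented at the conference: the item names, probabilities, and pooled click counter are invented assumptions. Two “competing” recommenders share the same objective and learn from the same pooled click data, and an early lead compounds until both converge on a single item.

    import random

    random.seed(0)  # make the toy run reproducible

    ITEMS = ["A", "B", "C", "D", "E"]

    # Two "competing" platforms share the same optimization goal (maximize clicks)
    # and observe the same pooled click data -- the toy conditions for monoculture.
    shared_clicks = {item: 1 for item in ITEMS}

    def recommend(clicks):
        # Both platforms rank purely by observed clicks: same objective, same data.
        return max(clicks, key=clicks.get)

    for _round in range(200):
        for _platform in range(2):  # each of the two platforms serves one recommendation
            choice = recommend(shared_clicks)
            # Users tend to click whatever they are shown, so an early lead compounds,
            # a crude stand-in for data network effects.
            if random.random() < 0.6:
                shared_clicks[choice] += 1

    print("Final click counts:", shared_clicks)
    print("Both platforms now recommend:", recommend(shared_clicks))

Running the sketch, one item quickly absorbs nearly all of the clicks and both platforms end up recommending it. This is the pattern the conference discussion labelled competitive convergence: the platforms are nominally competing, yet consumers see the same narrow set of outcomes.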

2. When AI Meets Culture: Cultural Avatars and User Engagement

Another key topic examined how AI systems interact with culture, particularly through the use of cultural avatars in AI chatbots. These systems are designed to reflect linguistic, social, and cultural norms, shaping how users experience trust, familiarity, and relevance. Research discussed at the conference indicated that culturally adaptive AI can improve user engagement and trust by 15–30% in diverse or multilingual contexts, particularly in service and support environments.

However, the discussions also highlighted risks when cultural representation is reduced to static or commercialized traits, potentially reinforcing bias or stereotyping. As conversational AI adoption accelerates—supported by evidence that over 50% of working-age adults in the United States have used generative AI tools—the governance of culturally adaptive systems becomes increasingly important (Federal Reserve Bank of St. Louis, 2025). Ethical design, ongoing evaluation, and human oversight were repeatedly emphasized as safeguards in high-autonomy AI environments.

3. The Disruptive Power of AI in Scientific Collaboration: The AlphaFold Example

The conference also examined the disruptive impact of AI on scientific knowledge creation, using AlphaFold as a leading example. AlphaFold has already been used by over three million researchers worldwide and has generated predictions for hundreds of millions of protein structures, dramatically reducing discovery timelines that previously spanned years (Jumper et al., 2021; Quantumrun, 2024). This shift is reshaping collaboration by enabling scientists to build upon shared AI-generated outputs rather than siloed datasets.

At the same time, speakers stressed that subject-matter experts remain essential to validate, contextualize, and govern AI-generated knowledge. As reliance on AI outputs grows, expert oversight becomes critical to ensure scientific rigor, prevent misinterpretation, and manage dependency on high-impact AI infrastructure. This balance between acceleration and accountability emerged as a recurring governance challenge.

4. Dynamic AI–Human Co-Learning in Service Operations

Another topic explored dynamic AI–human co-learning in service operations, focusing on how organizations balance learning through experimentation with reputational risk. Conference discussions referenced a two-stage model: early exploratory learning supported by human review, followed by controlled deployment with restricted feedback loops. While over 80% of organizations report piloting AI in customer-facing services, fewer than one-third have successfully scaled these systems, largely due to concerns around reliability, trust, and brand risk (McKinsey & Company, 2023).

In highly visible service environments, excessive AI experimentation can expose organizations to reputational harm. The conference emphasized governance mechanisms—such as escalation protocols, monitoring systems, and accountability structures—as essential tools for aligning AI learning processes with organizational responsibility.
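
Reflecting on this theme afterwards, I found it helpful to sketch what such a two-stage, governed learning loop could look like in code. The Python below is a hypothetical illustration of the idea rather than any system described at the conference; the stage names, the human_review and escalate placeholders, and the confidence thresholds are all assumptions of mine.

    from dataclasses import dataclass
    from enum import Enum

    class Stage(Enum):
        EXPLORATION = "exploration"  # Stage 1: broad learning, every output human-reviewed
        DEPLOYMENT = "deployment"    # Stage 2: controlled rollout, restricted feedback loop

    @dataclass
    class Interaction:
        query: str
        model_response: str
        confidence: float  # the model's self-reported confidence (illustrative)

    def human_review(item: Interaction) -> bool:
        """Placeholder for a human reviewer approving or rejecting a response."""
        # In a real system this would route to a review queue; here we simply
        # approve anything the model is reasonably confident about.
        return item.confidence >= 0.6

    def escalate(item: Interaction) -> str:
        """Escalation protocol: hand the case to a human agent and log it for monitoring."""
        return f"[escalated to human agent] {item.query}"

    def handle(item: Interaction, stage: Stage, training_buffer: list) -> str:
        if stage is Stage.EXPLORATION:
            # Stage 1: every response passes human review before reaching the customer,
            # and every approved interaction is allowed to feed back into learning.
            if human_review(item):
                training_buffer.append(item)
                return item.model_response
            return escalate(item)
        # Stage 2: only high-confidence responses go out automatically; everything else
        # is escalated, and only those escalated, human-handled cases feed back into
        # learning -- a deliberately restricted loop that trades learning speed for
        # reliability and brand protection.
        if item.confidence >= 0.85:
            return item.model_response
        training_buffer.append(item)
        return escalate(item)

    if __name__ == "__main__":
        buffer = []
        demo = Interaction("Where is my order?", "Your order ships tomorrow.", 0.7)
        print(handle(demo, Stage.EXPLORATION, buffer))  # approved by human review
        print(handle(demo, Stage.DEPLOYMENT, buffer))   # below 0.85, so escalated

What the sketch tries to capture is the governance intuition from the session: the accountability surface shifts between the two stages, from reviewing everything while the system learns broadly, to tightly controlling both what customers see and what the model is allowed to learn from once it is public-facing.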

5. Aligning Large Language Models with Human Decision-Making

The final theme addressed the challenge of aligning large language models (LLMs) with human decision-making in complex, interactive environments. While LLMs offer unprecedented capabilities in information synthesis and contextual response, misalignment can occur when model objectives diverge from human values, judgment, or situational nuance. This risk is amplified in high-autonomy contexts such as cybersecurity, finance, and healthcare.

In the United States, consumer adoption of generative AI is already widespread, yet enterprise-level alignment remains uneven (Federal Reserve Bank of St. Louis, 2025). Speakers emphasized the importance of human-in-the-loop oversight, interpretability, and structured governance frameworks—such as the NIST AI Risk Management Framework—to ensure responsible deployment and accountability (NIST, 2023; OWASP, 2023).

Poster Presentation: The ADOR Framework

As part of the conference, I presented my poster introducing the ADOR framework, which provides a governance-oriented approach to AI adoption by emphasizing accountability, decision oversight, and responsible outcomes. The feedback from faculty, students, and practitioners reinforced the relevance of leadership-led governance models in navigating the ethical, operational, and societal implications of AI. The conference experience strengthened my understanding of how AI systems must be guided not only by technical performance, but by human values and institutional responsibility.

References (Harvard Style)

Amra & Elma (2025) Programmatic advertising statistics and trends. Available at: https://www.amraandelma.com/top-programmatic-advertising-statistics-2025/

Federal Reserve Bank of St. Louis (2025) The state of generative AI adoption in the United States. Available at: https://www.stlouisfed.org/on-the-economy/2025/nov/state-generative-ai-adoption-2025

Insivia (2024) Programmatic advertising statistics. Available at: https://www.insivia.com/programmatic-advertising-statistics/

McKinsey & Company (2023) The state of AI in 2023: Generative AI’s breakout year. Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023

Jumper, J. et al. (2021) Highly accurate protein structure prediction with AlphaFold. Nature, 596, pp. 583–589.

National Institute of Standards and Technology (NIST) (2023) AI Risk Management Framework (AI RMF 1.0). Available at: https://www.nist.gov/itl/ai-risk-management-framework

OWASP (2023) Top 10 risks for large language model applications. Available at: https://owasp.org/www-project-top-10-for-large-language-model-applications/

Quantumrun (2024) AlphaFold 2: statistics, impact, and future implications. Available at: https://www.quantumrun.com/consulting/alphafold-2-statistics/

 

These discussions directly inform my ongoing research and the development of the ADOR framework, which focuses on accountability, decision oversight, and responsible AI outcomes.